70 research outputs found

    Simpler and Better Algorithms for Minimum-Norm Load Balancing

    Get PDF
    Recently, Chakrabarty and Swamy (STOC 2019) introduced the minimum-norm load-balancing problem on unrelated machines, wherein we are given a set J of jobs that need to be scheduled on a set of m unrelated machines, and a monotone, symmetric norm; we seek an assignment sigma: J -> [m] that minimizes the norm of the resulting load vector load_{sigma} in R_+^m, where load_{sigma}(i) is the load on machine i under the assignment sigma. Besides capturing all l_p norms, symmetric norms also capture other norms of interest, including top-l norms and ordered norms. Chakrabarty and Swamy (STOC 2019) give a (38+epsilon)-approximation algorithm for this problem via a general framework they develop for minimum-norm optimization, which proceeds by first carefully reducing the problem (in a series of steps) to a problem called min-max ordered load balancing, and then devising a so-called deterministic oblivious LP-rounding algorithm for ordered load balancing. We give a direct and simple (4+epsilon)-approximation algorithm for minimum-norm load balancing, based on rounding a (near-optimal) solution to a novel convex-programming relaxation of the problem. Whereas the natural convex program encoding the minimum-norm load-balancing problem has a large, non-constant integrality gap, we show that this issue can be remedied by including a key constraint that bounds the "norm of the job-cost vector." Our techniques also yield an (essentially) 4-approximation for: (a) multi-norm load balancing, wherein we are given multiple monotone symmetric norms and seek an assignment respecting a given budget for each norm; (b) the best simultaneous approximation factor achievable for all symmetric norms for a given instance.
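
    A minimal sketch, on a small made-up instance, of the objects this abstract works with: the load vector of an assignment sigma and two monotone symmetric norms (top-l and l_p). The function names and toy data are mine for illustration only; this is not the paper's rounding algorithm.

        import numpy as np

        def load_vector(proc_times, assignment, m):
            """Load vector load_sigma: proc_times[i][j] is job j's processing
            time on machine i (unrelated machines); assignment[j] is the
            machine that job j is placed on."""
            load = np.zeros(m)
            for j, i in enumerate(assignment):
                load[i] += proc_times[i][j]
            return load

        def top_ell_norm(v, ell):
            """Top-ell norm: sum of the ell largest coordinates (monotone, symmetric)."""
            return np.sort(np.abs(v))[::-1][:ell].sum()

        def lp_norm(v, p):
            """The l_p norm, another monotone symmetric norm."""
            return np.linalg.norm(v, ord=p)

        # Toy instance: 2 machines, 3 jobs (numbers are made up).
        proc_times = np.array([[1.0, 2.0, 4.0],
                               [3.0, 1.0, 2.0]])
        sigma = [0, 1, 1]   # job 0 -> machine 0, jobs 1 and 2 -> machine 1
        loads = load_vector(proc_times, sigma, m=2)
        print(loads, top_ell_norm(loads, 1), lp_norm(loads, 2))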

    Welfare Maximization and Truthfulness in Mechanism Design with Ordinal Preferences

    Full text link
    We study mechanism design problems in the ordinal setting, wherein the preferences of agents are described by orderings over outcomes, as opposed to specific numerical values associated with them. This setting is relevant when agents can compare outcomes but are not able to evaluate precise utilities for them. Such a situation arises in diverse contexts, including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. First, to design social-welfare-maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome, which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard (1973) and Satterthwaite (1975) force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, and yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results. We demonstrate the usefulness of these notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting and in structured settings such as (one-sided) matching markets and their generalizations, matroid and scheduling markets.
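
    As a hedged illustration of the kind of rank-based quantity such a measure compares (the exact definition of rank approximation is given in the paper and may differ), the sketch below counts, for each k, how many agents rank a chosen outcome among their top k choices; the names rank_profile, prefs, and the toy preferences are hypothetical and mine.

        def rank_profile(prefs, outcome):
            """For each k, count agents who rank `outcome` among their top-k choices.

            prefs[a] is agent a's preference order (best first) over outcomes.
            This is only an illustrative quantity, not the paper's definition."""
            n_outcomes = len(prefs[0])
            profile = [0] * n_outcomes
            for order in prefs:
                r = order.index(outcome)      # 0-based rank of the outcome for this agent
                for k in range(r, n_outcomes):
                    profile[k] += 1           # counted for every k >= its rank
            return profile

        # Toy example: 3 agents ranking outcomes 'x', 'y', 'z'.
        prefs = [['x', 'y', 'z'], ['y', 'x', 'z'], ['x', 'z', 'y']]
        print(rank_profile(prefs, 'x'))       # -> [2, 3, 3]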

    The Non-Uniform k-Center Problem

    Get PDF
    In this paper, we introduce and study the Non-Uniform k-Center problem (NUkC). Given a finite metric space (X, d) and a collection of balls of radii r_1 ≥ ⋯ ≥ r_k, the NUkC problem is to find a placement of their centers in the metric space and the minimum dilation α such that the union of balls of radius α·r_i around the i-th center covers all the points in X. This problem naturally arises as a min-max vehicle routing problem with fleets of different speeds. The NUkC problem generalizes the classic k-center problem, where all k radii are the same (which can be assumed to be 1 after scaling). It also generalizes the k-center with outliers (kCwO) problem, where there are k balls of radius 1 and ℓ balls of radius 0. There are 2-approximation and 3-approximation algorithms known for these problems, respectively; the former is best possible unless P = NP, and the latter has remained unimproved for 15 years. We first observe that no O(1)-approximation to the optimal dilation is possible unless P = NP, implying that the NUkC problem is harder than the above two problems. Our main algorithmic result is an (O(1), O(1))-bi-criteria approximation: we give an O(1)-approximation to the optimal dilation, but we may open Θ(1) centers of each radius. Our techniques also allow us to prove a simple (uni-criteria), optimal 2-approximation for the kCwO problem, improving upon the long-standing 3-factor. Our main technical contribution is a connection between the NUkC problem and the so-called firefighter problem on trees, which has been studied recently in the TCS community.
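
    For context, a minimal sketch of the classic farthest-point greedy 2-approximation for the uniform-radius special case (the k-center baseline the abstract mentions); it is not the paper's bi-criteria NUkC algorithm, and the point set below is made up.

        import math

        def gonzalez_k_center(points, k, dist=math.dist):
            """Farthest-point greedy for uniform k-center (a 2-approximation).

            Repeatedly adds the point farthest from the current centers,
            then reports the covering radius of the chosen centers."""
            centers = [points[0]]             # arbitrary first center
            while len(centers) < k:
                far = max(points, key=lambda p: min(dist(p, c) for c in centers))
                centers.append(far)
            radius = max(min(dist(p, c) for c in centers) for p in points)
            return centers, radius

        pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
        print(gonzalez_k_center(pts, k=2))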

    Integrality Gap of the Hypergraphic Relaxation of Steiner Trees: a short proof of a 1.55 upper bound

    Full text link
    Recently, Byrka, Grandoni, Rothvoss, and Sanita (STOC 2010) gave a 1.39-approximation for the Steiner tree problem, using a hypergraph-based linear programming relaxation. They also upper-bounded its integrality gap by 1.55. We describe a shorter proof of the same integrality-gap bound, obtained by applying some of their techniques to a randomized loss-contracting algorithm.

    Optimal Lower Bounds for Universal and Differentially Private Steiner Tree and TSP

    Get PDF
    Given a metric space on n points, an α-approximate universal algorithm for the Steiner tree problem outputs a distribution over rooted spanning trees such that for any subset X of vertices containing the root, the expected cost of the induced subtree is within an α factor of the optimal Steiner tree cost for X. An α-approximate differentially private algorithm for the Steiner tree problem takes as input a subset X of vertices and outputs a tree distribution that induces a solution within an α factor of the optimal as before, and satisfies the additional property that for any set X' that differs in a single vertex from X, the tree distributions for X and X' are "close" to each other. Universal and differentially private algorithms for TSP are defined similarly. An α-approximate universal algorithm for the Steiner tree problem or TSP is also an α-approximate differentially private algorithm. It is known that both problems admit O(log n)-approximate universal algorithms, and hence O(log n)-approximate differentially private algorithms as well. We prove an Ω(log n) lower bound on the approximation ratio achievable for the universal Steiner tree problem and the universal TSP, matching the known upper bounds. Our lower bound for the Steiner tree problem holds even when the algorithm is allowed to output a more general solution: a distribution over paths to the root.
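
    A minimal sketch of the objective in the universal definition above: the cost of the subtree induced by a terminal set X in a rooted spanning tree, i.e. the union of root-to-terminal paths with each tree edge paid once. The parent-pointer representation and the toy tree are my own illustration, not from the paper.

        def induced_subtree_cost(parent, edge_cost, X, root):
            """Cost of the subtree of a rooted spanning tree induced by terminals X.

            parent[v] is v's parent in the tree; edge_cost[v] is the cost of the
            edge (v, parent[v]). Walk each terminal up to the root, marking edges
            so that none is counted twice."""
            used = set()
            for v in X:
                while v != root and v not in used:
                    used.add(v)
                    v = parent[v]
            return sum(edge_cost[v] for v in used)

        # Toy tree rooted at 'r':  r - a - b,  r - c
        parent = {'a': 'r', 'b': 'a', 'c': 'r'}
        cost = {'a': 1.0, 'b': 2.0, 'c': 5.0}
        print(induced_subtree_cost(parent, cost, X={'b', 'c'}, root='r'))  # 1 + 2 + 5 = 8.0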

    Approximability of Sparse Integer Programs

    Get PDF
    The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, if P ≠ NP this ratio cannot be improved to k−1−ε, and under the unique games conjecture this ratio cannot be improved to k−ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a (2k^2+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k = 2, and for both problems when every A_ij is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.
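
    As a hedged sketch of the simplest idea behind row-sparse covering, the snippet below does classic LP threshold rounding for the special case of 0/1 covering programs with right-hand side 1 (hypergraph vertex cover with constraints of size at most k): every fractional variable with value at least 1/k is picked, giving a feasible solution of cost at most k times the LP optimum. The paper's algorithm for general CIPs with upper bounds additionally relies on knapsack-cover inequalities, which this sketch omits; scipy is assumed to be available.

        import numpy as np
        from scipy.optimize import linprog

        def threshold_round_cover(A, c):
            """LP threshold rounding for min{cx : Ax >= 1, 0 <= x <= 1},
            A a 0/1 matrix with at most k nonzeroes per row.

            Each constraint has at most k variables summing to >= 1, so some
            variable is >= 1/k; rounding all such variables up is feasible
            and costs at most k times the LP value."""
            k = int(max((row != 0).sum() for row in A))
            res = linprog(c, A_ub=-A, b_ub=-np.ones(A.shape[0]),
                          bounds=(0, 1), method="highs")
            x = (res.x >= 1.0 / k - 1e-9).astype(int)   # threshold rounding
            assert np.all(A @ x >= 1)                   # feasibility check
            return x, k

        # Toy instance: vertex cover in a 3-uniform hypergraph (k = 3).
        A = np.array([[1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]])
        c = np.array([1.0, 1.0, 1.0, 1.0])
        print(threshold_round_cover(A, c))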

    Adaptive Boolean Monotonicity Testing in Total Influence Time

    Get PDF
    Testing monotonicity of a Boolean function f: {0,1}^n -> {0,1} is an important problem in the field of property testing. It has led to connections with many interesting combinatorial questions on the directed hypercube: routing, random walks, and new isoperimetric theorems. Denoting the proximity parameter by epsilon, the best tester is the non-adaptive O~(epsilon^{-2} sqrt{n}) tester of Khot-Minzer-Safra (FOCS 2015). A series of recent results by Belovs-Blais (STOC 2016) and Chen-Waingarten-Xie (STOC 2017) has led to Omega~(n^{1/3}) lower bounds for adaptive testers. Reducing this gap is a significant question that touches on the role of adaptivity in monotonicity testing of Boolean functions. We approach this question from the perspective of parameterized property testing, a concept recently introduced by Pallavoor-Raskhodnikova-Varma (ACM TOCT 2017), where one seeks to understand the performance of testers with respect to parameters other than just the input size. Our result is an adaptive monotonicity tester with one-sided error whose query complexity is O(epsilon^{-2} I(f) log^5 n), where I(f) is the total influence of the function. Therefore, adaptivity provably helps monotonicity testing for low-influence functions.
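
    For reference, a minimal sketch of the classic non-adaptive edge tester for monotonicity (sample random hypercube edges and reject on a violation); this is the textbook baseline, not the influence-based adaptive tester of the abstract, and the trial count below is just the standard O(n/epsilon)-style choice.

        import random

        def edge_tester(f, n, eps, trials=None):
            """Non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.

            Samples random hypercube edges (x with bit i set to 0, same point
            with bit i set to 1) and rejects if f decreases along the edge.
            One-sided error: monotone functions are always accepted."""
            if trials is None:
                trials = int(10 * n / eps)
            for _ in range(trials):
                x = [random.randint(0, 1) for _ in range(n)]
                i = random.randrange(n)
                y = x[:]
                x[i], y[i] = 0, 1             # x <= y, differing only in bit i
                if f(tuple(x)) > f(tuple(y)):
                    return False              # found a violated edge: reject
            return True                       # accept

        # Example: a monotone function (majority) passes; its negation is caught.
        maj = lambda z: int(sum(z) >= 2)
        print(edge_tester(maj, n=3, eps=0.1), edge_tester(lambda z: 1 - maj(z), n=3, eps=0.1))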